97 research outputs found

    Online feature extraction based on accelerated kernel principal component analysis for data stream

    Kernel principal component analysis (KPCA) is a well-known nonlinear feature extraction method. Takeuchi et al. proposed an incremental variant of KPCA (IKPCA) that can update an eigenspace incrementally for a sequence of data. However, IKPCA must carry out an eigenvalue decomposition for every single data point, even when a chunk of data is given at one time. To reduce the computational cost of learning chunk data, this paper proposes an extended IKPCA called Chunk IKPCA (CIKPCA), in which a chunk of multiple data points is learned with a single eigenvalue decomposition. To further reduce computation time and memory usage for a large data chunk, the chunk is first divided into several smaller chunks, and only useful data are selected based on the accumulation ratio. In the proposed CIKPCA, a small set of independent data is first selected from the reduced data so that eigenvectors in a high-dimensional feature space can be represented as a linear combination of these independent data. Then, the eigenvectors are incrementally updated by keeping only an eigenspace model that consists of the independent data, their coefficients, the eigenvalues, and the mean information. The proposed CIKPCA can augment an eigen-feature space based on the accumulation ratio, which can also be updated without keeping all the past data, and the eigen-feature space is rotated by solving an eigenvalue problem once per data chunk. Experimental results show that the learning time of the proposed CIKPCA is greatly reduced compared with KPCA and IKPCA without sacrificing recognition accuracy.
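    The accumulation-ratio criterion used above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' code): it performs kernel PCA on a single chunk and keeps the smallest number of components whose accumulation ratio, i.e. the fraction of total eigenvalue mass captured by the leading components, reaches a threshold. The RBF kernel and the threshold value 0.95 are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def accumulation_ratio(eigvals, k):
    # Fraction of the total eigenvalue mass captured by the top-k eigenvalues
    s = np.sort(eigvals)[::-1]
    return s[:k].sum() / s.sum()

def select_components(X, theta=0.95, gamma=1.0):
    # Kernel PCA on one chunk: keep the smallest number of
    # components whose accumulation ratio reaches theta.
    K = rbf_kernel(X, X, gamma)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc = J @ K @ J                           # centered kernel matrix
    eigvals, eigvecs = np.linalg.eigh(Kc)
    eigvals = np.clip(eigvals, 0, None)      # drop tiny negative noise
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    k = 1
    while accumulation_ratio(eigvals, k) < theta and k < n:
        k += 1
    return eigvals[:k], eigvecs[:, :k]
```

    In CIKPCA the same ratio also drives data selection: samples in a sub-chunk that do not raise the ratio meaningfully can be discarded before the eigenspace update.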

    In-situ Neutron Tomography on Mixing Behavior of Supercritical Water and Room Temperature Water in a Tubular Flow Reactor

    We have synthesized metal oxide nanoparticles through hydrothermal reaction at around 400 °C and 25 MPa by mixing a stream of metal ion solution at room temperature with a stream of supercritical water in a continuous flow-type reactor. To visualize the mixing behavior of the two streams, we performed neutron tomography measurements. By rotating the mixing piece during the measurements while supplying supercritical water and room-temperature water, we succeeded in obtaining the three-dimensional distribution of neutron attenuation. The results clearly show how the two streams mix, which serves as a reference for numerical simulation.

    Online Feature Extraction Algorithms for Data Streams


    Effects of short-term denervation and subsequent reinnervation on motor endplates and the soleus muscle in the rat

    The rat sciatic nerve was locally frozen, and changes in the nerve, motor endplates, and the soleus muscle were examined for up to 6 weeks by light and electron microscopy. The wet weights of denervated soleus muscles, compared with contralateral values, progressively declined to a minimum at 2 weeks after injury (60.7±2.5%) and began to recover after 3 weeks. The sciatic nerve degenerated thoroughly after freezing. However, numerous regenerated myelinated and thin nerve fibers were observed at 3 weeks. They were considerably enlarged but still smaller than normal counterparts at 6 weeks postoperatively. Nerve terminals containing synaptic vesicles of endplates disappeared at day 1 and mostly reappeared at 3 weeks (in about 70% of the endplates). All endplates examined were reinnervated at 4, 5, and 6 weeks. On the other hand, postsynaptic folds of muscle fibers appeared to be only slightly influenced by denervation or reinnervation. Ultrastructural alterations of myofibrils, in particular the loss of register, appeared immediately after denervation, spread progressively, peaked at 2 weeks, ameliorated following reinnervation, and had largely normalized at 6 weeks after freezing. The proportion of type II fibers in the soleus muscle similarly showed an increase and a decrease, with a short delay, in response to denervation and reinnervation, respectively. This study clearly demonstrates that the nerve supply affects the ultrastructural integrity of skeletal muscles. In addition, the changes in the endplates and the soleus muscle observed after short-term denervation are largely reversible following reinnervation.

    A Face Recognition System Using Neural Networks with Incremental Learning Ability

    This paper presents a fully automated face recognition system with incremental learning ability that has two desirable features: one-pass incremental learning and automatic generation of training data. As the classifier of face images, an evolving type of neural network called Resource Allocating Network with Long-Term Memory (RAN-LTM) is adopted. This model enables efficient incremental learning without serious forgetting. In the face detection procedure, face localization is first conducted based on skin color and edge information. Then, facial features are searched for within the localized regions using a Resource Allocating Network, and the selected features are used in the construction of face candidates. After face detection, the face candidates are classified using RAN-LTM. The incremental learning routine is applied only to misclassified data, which are collected automatically in the recognition phase. Experimental results show that the recognition accuracy improves without increasing the false-positive rate as incremental learning proceeds. This suggests that incremental learning is a useful approach to face recognition tasks.
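    The strategy of learning only from misclassified samples can be illustrated with a minimal sketch. RAN-LTM itself is not reproduced here; a simple nearest-centroid classifier stands in for it (a hypothetical substitute), and `recognize_and_learn` mirrors the routine of updating the classifier only when recognition fails, so correctly recognized faces incur no training cost.

```python
import numpy as np

class IncrementalCentroidClassifier:
    """Hypothetical stand-in for RAN-LTM: a one-pass classifier that
    keeps only per-class means, not the past training samples."""
    def __init__(self):
        self.centroids = {}   # label -> (mean vector, sample count)

    def predict(self, x):
        # Nearest class mean; None while no class has been learned yet
        if not self.centroids:
            return None
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(x - self.centroids[c][0]))

    def partial_fit(self, x, y):
        # One-pass running-mean update (no stored history)
        mean, n = self.centroids.get(y, (np.zeros_like(x, dtype=float), 0))
        self.centroids[y] = ((mean * n + x) / (n + 1), n + 1)

def recognize_and_learn(clf, x, y_true):
    # Learn only when the current prediction is wrong (or the class
    # is unseen), mirroring the paper's incremental-learning routine.
    if clf.predict(x) != y_true:
        clf.partial_fit(x, y_true)
```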

    Incremental linear discriminant analysis for classification of data streams

    This paper presents a constructive method for deriving an updated discriminant eigenspace for classification when bursts of data that contain new classes are added to an initial discriminant eigenspace in the form of random chunks. We propose an incremental linear discriminant analysis (ILDA) in two forms: a sequential ILDA and a chunk ILDA. In experiments, we tested ILDA on datasets with a small number of classes and low-dimensional features, as well as datasets with a large number of classes and high-dimensional features. We compared the proposed ILDA against traditional batch LDA in terms of discriminability, execution time, and memory usage as the volume of added data grows. The results show that the proposed ILDA can effectively evolve a discriminant eigenspace over a fast and large data stream and extract features with superior discriminability for classification compared with other methods.
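    The chunk-wise idea can be sketched with sufficient statistics: per-class sums, outer-product sums, and counts are updated per chunk, so the discriminant directions can be recomputed at any time without storing past samples, and new classes appear simply as new keys. This is a hypothetical illustration of the principle only; the paper's ILDA updates the discriminant eigenspace itself rather than the raw scatter matrices, which is what makes it scale to high-dimensional features.

```python
import numpy as np

class ChunkILDASketch:
    """Minimal sketch of chunk-incremental LDA via sufficient statistics
    (illustrative stand-in, not the paper's eigenspace-update algorithm)."""
    def __init__(self, dim):
        self.dim = dim
        self.sums = {}     # class -> sum of feature vectors
        self.sqsums = {}   # class -> sum of outer products x x^T
        self.counts = {}   # class -> number of samples seen

    def update(self, X, y):
        # Accumulate sufficient statistics from one chunk; unseen
        # class labels are added on the fly.
        for c in np.unique(y):
            Xc = X[y == c]
            self.sums[c] = self.sums.get(c, np.zeros(self.dim)) + Xc.sum(0)
            self.sqsums[c] = (self.sqsums.get(c, np.zeros((self.dim, self.dim)))
                              + Xc.T @ Xc)
            self.counts[c] = self.counts.get(c, 0) + len(Xc)

    def components(self, k=1):
        # Rebuild within-class (Sw) and between-class (Sb) scatter from
        # the statistics, then take the leading eigenvectors of Sw^{-1} Sb.
        N = sum(self.counts.values())
        mu = sum(self.sums.values()) / N                 # global mean
        Sw = np.zeros((self.dim, self.dim))
        Sb = np.zeros((self.dim, self.dim))
        for c, n in self.counts.items():
            mc = self.sums[c] / n
            Sw += self.sqsums[c] - n * np.outer(mc, mc)
            Sb += n * np.outer(mc - mu, mc - mu)
        reg = 1e-6 * np.eye(self.dim)                    # keep Sw invertible
        evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg, Sb))
        order = np.argsort(-evals.real)
        return evecs[:, order[:k]].real
```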